Right on time

Most projects fail because teams run out of time. They run out of time to deliver better code, to deliver tests, to deliver (almost) bug-free software. And the crying starts: it’s the customer’s fault, it’s the deadline, it’s the tools, the hardware, the existing system is a mess… and so forth. Complaining will not fix the situation. You’ve failed. You’d better do a post-mortem analysis and try to learn from your failures so you don’t repeat them.

I’ve always said that making mistakes is not bad; making the same mistake twice is the devilish thing. So, try to make new mistakes every time!

Software development is like a marathon: it’s a long-running race, so it’s better to save strength and energy for the final meters. Many software teams work long hours, get fed up and quit the project or, even worse, the company. That’s not good. Long hours should be considered a failure to finish our commitments on time. Long hours should be the exception, not the rule.

I’ve been working at a software factory, on a team that delivers features every release (every month). Each feature involves 3–5 people and a month of work, and I’ve always been successful. On the other hand, I’ve seen the same mistakes made many times, so I wanted to share my experience and point out, IMHO, where others fail and where I succeed.

Continuing with athletics, think of iterative, incremental, feature-development management as a 100-meter race. The key is: never, ever relax.

So, software development is like a marathon, feature development is like a 100-meter race. Get it? In other words, software development is like a war, feature development is a battle… whatever, I’m rambling…

Like 100-meter races, feature development management consists of:

  • Start tough
  • Keep it up during the race
  • Don’t relax at the end

Start tough

Most teams fail at the very early stages, due to «analysis paralysis». There is a lot of uncertainty when gathering requirements, estimating tasks, splitting user stories and so forth.

Don’t estimate too early. Realize that you know almost nothing about the software you have to build, and work with that uncertainty. Don’t estimate until you have taken a look at the code you’ll have to deal with.

Use prototypes, mock-ups, UML diagrams… whatever you find useful to make the problem understandable. Share your drafts with your analysts or customers to reduce uncertainty. Try to explain what you have to do to your analyst, to check that you’ve got it. Involve the customer. Stress this until you get their attention. Try to make them aware of the importance of being involved, especially in the early phases of development.

Don’t get stuck. While you wait for answers, formulate more questions, work on other requirements or diagrams, look at the code you’ll have to work with.

Don’t be fooled by a distant deadline. Define objectives for every day, so you can measure progress from the very beginning.

Try to assign work to developers as soon as possible, and involve them in the early stages of development, so that they are not mere «programmers» but actually think about what they are doing and about whether a given solution makes sense: whether it adds value to the customer or not.

Keep it up in the middle

As soon as you gain «velocity» and things start going well, you’ll have the feeling that you can relax a little bit, because «you are doing well». Don’t. Things always get complicated at the end: requirement changes, bugs, forgotten requirements, non-functional requirements that aren’t met (performance!)…

Keep up the pace. The earlier you finish your iterations, the better: you’ll have more time for the unforeseen at later stages.

Don’t relax at the end


The final stages are vital, and lots of things (especially stupid things) can happen:

  • Someone forgot to commit that bugfix
  • Someone forgot to push that bugfix
  • Someone forgot to merge that bugfix
  • Someone forgot to version that file
  • Someone forgot to include that configuration change in the deployment instructions
  • Someone understood incorrectly some requirement change.
  • Someone assumed that someone else would be responsible for THAT small tweak in the code.
  • The list grows.

It’s a shame to throw away your hard work at this stage over such naive mistakes…

So, when you’re about to deliver, sleep well, eat well and get some coffee. Make sure you are wide awake and check every single thing. Control it. Double check. Ask. Twice, if necessary. Don’t leave any loose ends; keep everything fastened.

Some more pieces of advice

Measure the progress all the time.

Involve the customers, analysts, testers and developers. All of you are in the same boat, all of you have the same aim, so all of you must pull in the same direction.


Try to define objectives every day and check at the end of the day whether you achieved them or why you didn’t. If you didn’t, you have a risk: make a plan to handle it. This strategy will help you detect risks very early, before they get out of control. Be disciplined and stay alert.

Share the status with your team, so they stay involved and aware of the situation. Make them feel part of the management, part of the goal.

Don’t force your developers to work long hours. Let them make their own commitments and hold them to them. You won’t get disappointed. Developers LOVE achievable challenges, especially when they feel they have signed up for them. Have them sign their own commitments and remind them that those are their commitments. It’s their word. Their honor. If they eventually fail to meet their commitments, handle that as a risk: review your plan and your estimates, but don’t get angry with your team; be patient.

You are a facilitator. A guardian. Don’t let your team be interrupted from outside. You are the focal point of your team. You are the bodyguard, the sheepdog. You control the communication channels: the executives channel, the customers channel, the Q&A team channel, the analysts channel. There’s nothing more annoying for developers than being distracted by something other than… development. Don’t get me wrong: developers don’t work in isolation; they constantly talk to each other, but… they talk about development, nothing else. Talk to them to make sure they don’t get stuck. Fix their problems, find solutions, do whatever is needed to make their work easy. Provide good chairs, big tables, hardware… whatever your team needs. Don’t wait until they ask you. Take a look at them and figure out what they might need.

——

I’ve tried to keep these topics in mind when leading developments, and the number of long hours has always been minimal, the team members were highly motivated and they felt part of the big picture, providing ideas and designs and detecting problems in the solutions upfront. Team members had time to sleep, get on with their personal lives, eat well and have fun. That’s what marathon runners do: save energy for the future.

My essential bookshelf

Regarding programming and software engineering, I was recently thinking about the following question: if I had to choose the books that have influenced me the most, which ones would I pick?

I found the question interesting in itself, so I changed it to this one: if I had to hand some bibliography to a newbie [developer], which books would I recommend?

So, this is my list. I’m really proud of it. There’s nothing special about it; indeed, you can find similar lists elsewhere. By the way, «Coding Horror» is one of my favourite blogs and I’m a fan of Jeff Atwood. Maybe some day I’ll talk about my favourite blogs… So, back to the meat, er… the list!

  • Peopleware, Timothy Lister & Tom DeMarco
  • The Pragmatic Programmer, Andy Hunt & Dave Thomas
  • Code Complete 2, Steve McConnell
  • The Clean Coder, Robert C. Martin
  • Clean Code, Robert C. Martin
  • Agile Principles and Patterns, Robert C. Martin
  • Software Estimation: Demystifying the Black Art, Steve McConnell
  • Refactoring: Improving the Design of Existing Code, Martin Fowler
  • Implementation Patterns, Kent Beck
  • Test-Driven Development: By Example, Kent Beck
  • Design Patterns, Erich Gamma et al.
  • Rapid Development: Taming Wild Software Schedules, Steve McConnell
  • The Mythical Man-Month, Frederick Brooks
  • Software Configuration Management Patterns, Steve Berczuk
  • C# in Depth, Jon Skeet

Well, the last one is only recommended if you are a C# programmer with a little bit of experience, and Berczuk’s is a good book as long as you’re interested in Software Configuration Management in itself (and not just as a tool). Since all of us work with a version control system… I think it’s a good idea to dig a little deeper into that topic.

You’ll notice that the list is not in any particular order. That’s on purpose. I think these are damn good books, no matter the order, no matter which one. If I were you, I would read ’em all; I’d just pick one and start reading.

They have influenced me so deeply and taught me so much that I try to apply what they say every single day at work. As Robert C. Martin says in «The Clean Coder», programming at work is like a violinist performing: a musician trains at home and does their best at the concert. I try to do that, practicing at home and doing my best at work, so when I run into some problem or some particular situation I think about Robert Martin, or Kent Beck, or Martin Fowler, the same way a violinist thinks about Vivaldi, Mozart or Bach, I suppose.

I really admire these craftsmen. They have all made me love the craftsmanship of software development.

Implementing for today (not for tomorrow)

When we start programming professionally for some company, it’s too easy to fall into the trap of implementing things with too many just-in-cases or what-ifs.

I used to say that we were taught that way when we studied Software Engineering or Programming, but now I’m not so sure about it. I think we just missed the point. When we were told about writing reusable code, we automatically thought: «okay, this is about writing code today so that I can reuse it tomorrow, right?».

Then we realize that we really don’t know what the requirements will be tomorrow, and we end up with lots of awesome classes and methods and interfaces that are now completely useless. Now we need to maintain them and they are difficult to adapt, so in the short term they start to smell bad, and within a couple of months that code is completely rotten.

So, the solution seems to be easy at first:

Implement to fit the requirements of today. As long as your code fits the SOLID principles, KISS and DRY and a couple more, you are safe and sound.

First things first: we software engineers are cool at inventing acronyms, aren’t we?

Okay, but let’s go to the point…

The problem is that sometimes you come up with a solution that is «surely» SRP but only «poorly» OCP, «reasonably» LSP and «yeah, why not?» DIP.

Then you ask a workmate and they suggest an approach that is «pitiful» SRP but a «confident 100%» OCP.

Regarding patterns, sometimes Iterator is almost okay but it makes the «iterated» a little bit coupled to the «iterator». Sometimes the best approach is not obvious: Adapter, Mediator or Proxy?

Regarding object-oriented detailed design: inheritance, interfaces, delegation or just composition?

Which one is better?

Let’s consider an example like the following: you have to implement a system that handles employees. Some employees are paid a monthly rate and others by the hour, some others have a basic income plus a part that depends on the sales they achieve that month. Finally, some others can choose how they want to be paid: by the hour or at a monthly rate.

Regarding the database, it’s easy to identify an inheritance relationship in the Entity-Relationship diagram. That inheritance can be mapped in one of three ways:

  • Implement the parent table and the child tables
  • Implement only the parent table
  • Implement only the child tables

Regarding the code, you can go for the Template Method pattern and use inheritance, with an abstract Employee class and the actual employee types inheriting from it, or go for the Strategy pattern and use delegation (assuming .NET programming) to calculate the income.
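
To make the Strategy option concrete, a minimal sketch could look like the following (all names are hypothetical, just to show the delegation):

public interface IPaymentStrategy
{
    decimal CalculateIncome(Employee employee);
}

public class MonthlyRateStrategy : IPaymentStrategy
{
    public decimal CalculateIncome(Employee employee)
    {
        return employee.MonthlyRate;
    }
}

public class HourlyRateStrategy : IPaymentStrategy
{
    public decimal CalculateIncome(Employee employee)
    {
        return employee.HourlyRate * employee.HoursWorked;
    }
}

public class Employee
{
    public decimal MonthlyRate { get; set; }
    public decimal HourlyRate { get; set; }
    public decimal HoursWorked { get; set; }

    // The employee delegates the calculation, so an employee who switches from
    // hourly to monthly pay just gets a different strategy assigned at runtime.
    public IPaymentStrategy PaymentStrategy { get; set; }

    public decimal CalculateIncome()
    {
        return PaymentStrategy.CalculateIncome(this);
    }
}

With the Template Method option, an abstract Employee base class would instead declare an abstract CalculateIncome() that each concrete employee type overrides; the trade-offs between the two approaches are discussed below.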

That’s what comes to my mind as a first approach.

So, which one is better?

It depends.

It depends on what? We implement for today, don’t we? So there must be a perfect solution to this problem.

But silver bullets never exist in software, remember?

The trick here, for experienced developers, is to anticipate change somehow. If I were in a situation like the one depicted above, I would ask my customer something like:

  • Is it possible for an employee to change his/her payment mode?
  • Is it possible that new payment options will arise?
  • Does all the employee data need to be loaded in every operation?

Going back to the database: we know that if we use only one EMPLOYEE table we will have lots of nulls in the columns that don’t apply to some specific payment modes. That’s not cool; we are wasting lots of disk space. If we use the «children-only» option, then we will have lots of redundancy in the common columns (again, a waste). Finally, if we use the parent table AND the children, we will need a JOIN to load all the employee information, which may hurt the performance of the overall system.

About the code, we all know that inheritance is hard to maintain: if new child classes appear that don’t fit the common parent, we start adding levels to the hierarchy, and in a couple of years we have a hierarchy that is too rigid, hard to maintain, hard to test and hard to refactor without breaking anything. On the other hand, if the payment options don’t change, then the Template Method may be easier to understand and more natural and elegant to use than the Strategy pattern.

So, software designs are rarely good for every single situation. Solutions are good for certain requirements at a certain point in time, and we should base our decisions on that. As long as we have a clean design, if we eventually need to make a change and realize that our code is not prepared for that kind of change, we refactor with the support of our tests. But let’s face it: there’s no perfect design. It depends on the requirements. Even the very best piece of software design may fail the SOLID principles under some twisted change of requirements. Let’s embrace refactoring, then.

The important thing to get here is that, when we need to give a solution to a certain problem, we need to come up with several solutions so that the right questions arise; those questions will lead us to choose the most appropriate solution according to the answers we get. Then we choose the simplest, cleanest, most SOLID solution possible for today’s requirements, anticipating tomorrow’s changes if (and only if) we are strongly certain of what they will be.

Mentoring

Currently I’m working in a team where almost everyone has less experience than I do with .NET technologies and web development. I have to admit that I have been programming in .NET for about ten years now (not continuously; I hate being asked that question when applying for a job because I consider it plain stupid: no one does the same thing for more than a couple of months at a time).

But I still have little experience in web development: just a couple of years of ASP.NET and a couple more of PHP when I started working as a professional developer… and I really remember little of PHP programming.

So, at the very beginning we had to get things done and we had little experience. I put on the senior developer’s shoes and started studying ORMs, Dependency Injection tools, the new stuff in ASP.NET MVC 5, jQuery, Bootstrap, AngularJS, SharePoint…

And I started to train the rest of the team in the things I read and learnt on weekends. Did that make me an expert in anything? Absolutely not; I just wanted to share what I was learning with my pals.

Eventually I had the opportunity to give a talk at a software event. Several of my colleagues were invited to participate as well, but they were a little reluctant at the very beginning:

– What if there’s someone who knows more than me and corrects me or makes a fool of me in public?

I haven’t spoken in public more than a few times, but I’ve never come across anyone that rude. People go to events with a positive attitude, wanting to learn something or share some insight or knowledge about the topics covered in the talk; it rarely happens that some expert goes to a talk just to make a fool of the speaker… and I don’t think the audience would have much fun with such a troll.

If that eventually happens, there are ways to work around uncomfortable situations:

– Hum, I’m not sure about that; maybe you’re right.

– I agree with you at some extent, but…

– I don’t have the answer right now; lemme check it and I’ll come back to you then; could you give me your e-mail address after the talk, please?

– I see your point but you have to consider that…

– If you don’t mind, I’ll be glad to discuss this issue with you after the talk; now let’s move on to the next topic so that the audience doesn’t get bored and we don’t run out of time. I hope you understand; do you agree?

And don’t be afraid of admitting that you’re wrong. Be humble upfront. If you feel that you know a little bit about a topic, then share it! You can simply start by being humble: «hey guys, I’ve worked with this for some time but I don’t know all the gory details yet, you know». Besides, realizing that you were wrong some time after you wrote or said something is okay; it means you’ve kept working on it, it means you’ve learnt, that you have a deeper understanding of the topic. That’s an improvement; good for you!

Please, don’t get me wrong: I’m not saying that it’s good to talk about something after merely reading about it. It’s good to train, to play, to research a little bit further, and then talk about it or teach other people about it.

To give an example: I was asking my workmates to avoid conversions such as «.ToList()» just to iterate over a collection with a foreach loop, because it’s nonsense; but then I was introduced to Entity Framework and the problem of buffering a query in EF to avoid keeping connections open, so I had to go back and say: «Hey dude, when I said never, ever use .ToList()… well, when it’s about EF things are not that easy…».
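
Roughly, the two situations look like this (a sketch with made-up names, not code from a real project):

// Plain in-memory collection: calling ToList() just to run a foreach is a useless copy.
IEnumerable<string> names = GetNamesFromMemory();     // hypothetical helper
foreach (string name in names)                        // iterate directly, no ToList()
    Console.WriteLine(name);

// Entity Framework: ToList() executes the query and buffers the results, so the
// connection/data reader isn't kept open while each row is being processed.
using (var context = new ShopContext())               // hypothetical DbContext
{
    var pending = context.Orders
                         .Where(o => !o.Shipped)
                         .ToList();                    // query runs and buffers here

    foreach (var order in pending)
        ProcessOrder(order);                           // hypothetical processing
}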

So, I’d like to emphasize: share your knowledge with others. If someone eventually finds a mistake in something you said or wrote, that’s cool; you can learn from your mistakes. If you don’t share your insights, there’s little room for improvement and learning. No one will blame you for being wrong. All of us are eventually wrong about something, and that’s okay.

The Inversion of Control Pattern (IoC)

Hi there! It’s me, remember me?

It’s been a while; a lot of things have happened (good things, don’t worry)… I just forgot about this blog, and recently I read some good stuff that could make a good article… so here we are!

Today I’ll talk about a pattern I’ve been using for more than a year and a half and whose usefulness is proven: the Inversion of Control pattern (IoC).

Definition

When a class (say ClassA) depends on another class (say ClassB) and needs an instance of it in order to do its job, we say that ClassA is coupled to ClassB. If ClassA needs to know a lot of details about ClassB, then we say it’s tightly coupled.

public class ClassB
{
    public void SomethingCool() { ... }
}

public class ClassA
{
    private ClassB svc;
    public ClassA()
    {
        // ClassA creates its own ClassB: it is tightly coupled to the concrete class.
        svc = new ClassB();
    }

    public void DoSomethingInteresting()
    {
        svc.SomethingCool();
    }
}

Coupling is a problem mainly for two reasons:

  • Changes in ClassB have the risk of breaking ClassA.
  • Designs are not flexible and it’s difficult to modify ClassA so that it uses the services provided by ClassB differently.

Normally, the best way to reduce coupling is to introduce an abstraction layer between the two, implemented as an abstract class or interface, so that ClassA depends on ClassB only through that abstraction.

public interface ICoolService
{
    void SomethingCool();
}

public class ClassB : ICoolService
{
    public void SomethingCool() { ... }
}

public class ClassA
{
    private ICoolService svc;

    public ClassA()
    {
        // Better, but ClassA still chooses the concrete implementation itself.
        svc = new ClassB();
    }

    public void DoSomethingInteresting()
    {
        svc.SomethingCool();
    }
}

This doesn’t solve everything, though, because ClassA still needs to determine which implementation of the abstraction it will use.

Moving the creation of the dependency (ClassB) out of the dependent class (ClassA) is what the IoC (Inversion of Control) pattern tries to solve. The name reflects what it does: it inverts the control between the «depender» (ClassA) and the «dependee» (ClassB).

IoC is an abstract pattern that doesn’t say how to achieve our purpose. Two common ways to implement it are:

  • Service Locator
  • Dependency Injection

The Service Locator Pattern (SL)

The aim of this pattern is to achieve IoC by making software components get their dependencies from an external component called the Service Locator (SL).

There are two kinds of SL:

Strongly typed

public interface IServiceLocator
{
    ICoolService GetServiceB();
}

and then:

public class ClassA
{
    private ICoolService svc;

    public ClassA(IServiceLocator sl)
    {
        svc = sl.GetServiceB();
    }

    public void DoSomethingInteresting()
    {
        svc.SomethingCool();
    }
}

The code that instantiates ClassA needs to provide an instance of a class that implements IServiceLocator. This interface returns instances of service interfaces (IService1, IService2, …) so that all client classes (such as ClassA) are loosely coupled to the services. As you can see, we have delegated the problem of instantiating a concrete ClassB to the SL.
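
As a quick sketch (the class name is hypothetical), a concrete locator could simply be:

public class DefaultServiceLocator : IServiceLocator
{
    public ICoolService GetServiceB()
    {
        // The locator, not ClassA, decides which concrete implementation is used.
        return new ClassB();
    }
}

// Wiring it up:
var classA = new ClassA(new DefaultServiceLocator());
classA.DoSomethingInteresting();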

The problem with this approach is that IServiceLocator is not flexible regarding the services it provides: adding new services after it has been defined requires modifying the interface and the classes that implement it.

Weakly typed

public interface IServiceLocator
{
    TService GetService<TService>();
}

so:

public class ClassA
{
    private ICoolService svc;

    public ClassA(IServiceLocator sl)
    {
        svc = sl.GetService<ICoolService>();
    }

    public void DoSomethingInteresting()
    {
        svc.SomethingCool();
    }
}

Thus, the SL can provide new services without knowing about them ahead of time, reducing maintenance.

On the other hand, the weakly-typed SL can do little custom configuration for each service provided, since it is so generic.
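
Just as an illustration (a hand-rolled toy with hypothetical names, not a real library), a weakly-typed locator can be little more than a dictionary from service types to factories:

// Requires System and System.Collections.Generic.
public class SimpleServiceLocator : IServiceLocator
{
    private readonly Dictionary<Type, Func<object>> factories =
        new Dictionary<Type, Func<object>>();

    // Registration: "when someone asks for TService, build it like this".
    public void Register<TService>(Func<TService> factory)
    {
        factories[typeof(TService)] = () => factory();
    }

    public TService GetService<TService>()
    {
        return (TService)factories[typeof(TService)]();
    }
}

// Wiring it up:
var locator = new SimpleServiceLocator();
locator.Register<ICoolService>(() => new ClassB());

var classA = new ClassA(locator);   // ClassA will ask the locator for ICoolService
classA.DoSomethingInteresting();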

The Dependency Injection Pattern (DI)

Instead of having an intermediary object that handles all the dependencies, they are received directly at instantiation time as constructor parameters. This kind of DI is called Constructor Injection.

public class ClassA
{
    private ICoolService svc;

    public ClassA(ICoolService someSvc)
    {
        svc = someSvc;   // the dependency is handed in; ClassA doesn't create it
    }

    public void DoSomethingInteresting()
    {
        svc.SomethingCool();
    }
}

This way dependencies are explicit in the constructor and, by reading it, we can figure out all the dependencies ClassA has.

Conversely, when using the SL we need to read the whole of ClassA, looking for SL usages, to get a picture of the dependencies ClassA has.

An alternative to Constructor Injection is Property Injection, which consists of defining in the client class (ClassA) a property of the interface implemented by the service it depends on, and relying on someone to set it.

public class ClassA
{
    private ICoolService svc;

    public ICoolService CoolService
    {
        get { return svc; }
        set { svc = value; }
    }
}

The constructor is cleaner (or even unnecessary) since it doesn’t need to receive all the services.

The problem with this implementation is that we trust that someone will set the property; otherwise a null reference exception will be thrown… unless we protect every single access to the service with a null check, which could hide bugs in the code.

That’s why Constructor Injection is strongly recommended.

Dependency Injection Containers

There’s still an unanswered question:

how do we provide the implementations of the dependencies in the first place?

So far we’ve moved the dependencies out of the business code, but that’s just… moving the problem somewhere else instead of solving it. If the whole system is based on DI, then someone has to provide the implementations for… everything else.

Here’s where Dependency Injection Containers come into play. DI Containers are libraries that implement what we need: they provide an API to define who implements what, so that when someone needs the what, some who is provided by the container. It’s like an SL extracted into an external library.
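
Just to make this tangible (the choice of container is mine, not the post’s; Unity, Ninject, Autofac or StructureMap would look very similar), with a container such as Microsoft.Extensions.DependencyInjection the wiring could be:

// using Microsoft.Extensions.DependencyInjection;

// Registration: tell the container "who implements what".
var services = new ServiceCollection();
services.AddTransient<ICoolService, ClassB>();
services.AddTransient<ClassA>();

var provider = services.BuildServiceProvider();

// Resolution: the container builds ClassA and injects an ICoolService
// (a ClassB instance) into its constructor for us.
ClassA classA = provider.GetRequiredService<ClassA>();
classA.DoSomethingInteresting();

The key difference from a hand-rolled locator is that the container inspects ClassA’s constructor and supplies its dependencies automatically.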

Testing

Testing benefits hugely from IoC, since we can mock the dependencies and inject them into the code just by configuring the DI Container a little, telling it that, when testing, the implementations are provided by the mocks and not by the actual implementations. Therefore, with no extra fluff in the code, we can easily test it.
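
For instance, with Constructor Injection a hand-rolled fake (hypothetical names, no mocking library involved) is enough to test ClassA without touching the real ClassB:

public class FakeCoolService : ICoolService
{
    public bool WasCalled;

    public void SomethingCool()
    {
        WasCalled = true;   // record the interaction instead of doing real work
    }
}

// In the test, inject the fake through the constructor instead of the real ClassB:
var fake = new FakeCoolService();
var classA = new ClassA(fake);

classA.DoSomethingInteresting();

// Assert with your favourite framework, e.g. NUnit:
// Assert.IsTrue(fake.WasCalled);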

Stack Browser app

Hi there, it’s been a very long time since the last post… I promise to write more often and keep this good ol’ blog updated…

Today I’m glad to announce that I’ve just published my first Android app: Stack Browser!

[Screenshot: main Stack Browser screen showing the list of questions]

What stackoverflow is

Stack Browser is intended for software engineers who regularly use stackoverflow. In case you don’t know it, this website is probably the best-known Q&A site about computing. It was created several years ago by Joel Spolsky (FogBugz, among others) and Jeff Atwood (Coding Horror). Since it started back in 2008, stackoverflow has become the most popular Q&A website, and almost anything you google related to programming will probably point you to it. Due to its success, lots of sibling sites have appeared since then, all built around the same idea: someone asks something, the community answers, and both questions and answers are voted on by users, so users earn high rankings and their geek ego gets a boost (I’m just kidding). Nowadays, all those sites are grouped under the Stack Exchange umbrella (yeah, let’s call it a concept or an idea).

[Screenshot: bookmarks management]

What Stack Browser is

So, this unofficial app lets users browse (actually browse) stackoverflow questions and answers. This is the first release of my first app, so the feature set is still rather small:

  • Browse different categories of latest questions: active, most viewed / answered, featured, new, unanswered, most popular in the week / month
  • Filter the listed results by a search term.
  • Search on the website by title, tags and answers.
  • Read the detail of the questions and answers.
  • Bookmark your favourite questions to read them later.

[Screenshot: detail of a question and its answers]

My first aim in this release was to make it easy for users to browse questions related to some specific term or issue, read the answers and keep them in a handy place to look them up later. I’m not that interested right now in showing who wrote what, or in posting or voting. I was interested in the content, getting straight to the point.

There are other apps available in Google Play; most of them are not very useful or are just web browsers embedded in Android applications. A couple of them are very nice, but they don’t make searching for or reading a topic as easy as I’d like to see in a stackoverflow app.

My concern was to provide an app that makes reading a topic easy in a reduced-size screen.

 

[Screenshot: a question expanded to use all the available space]

Tablets are one of my future targets, but first things first, I wanted an app in my smartphone to read stackoverflow questions in an easy way.

During the development of this app I had to figure out possible solutions to provide an acceptable way of reading questions and answers; among them: letting users expand the question to use all the available space, showing an answer in a different activity when it’s touched, and providing different layouts for portrait and landscape orientations.

That said, I’d like to develop more features that I have planned for the future, if users rate this app and like it. In the meantime I’ll be developing new Android applications, but if I get some acceptance by the community, these are the features I’d like to develop in the near future:

[Screenshot: landscape layout for a specific question]

  • Tablet specific layouts to fully use all the space available.
  • Translate the app into Spanish, German…
  • Log in and voting features (cool!)
  • List users and their profiles.
  • List tags and information about them.
  • Support for other Stack Exchange sites (cool!)

So, I hope you like this app. Download it, rate it and send us feedback via Facebook, Twitter or Google+, rate it in Google Play, or leave a comment just below!

The Professional Services Law and Computer Engineering

This controversial law, which currently exists only as a draft but should be confirmed in the coming months, has stirred up a lot of debate. In this article I will try to clarify and explain, as far as I know, how the LSP (Ley de Servicios Profesionales, the Professional Services Law) affects Computer Engineering.

What the law says

In essence, the law removes most professional attributions. This means that an industrial engineer’s signature will no longer be required for a project to build an industrial facility, an architect’s for the construction of a house, or a bridge engineer’s for building a bridge. Some very specific attributions are kept, but the more general ones are removed, the idea being that you don’t need to be an engineer in A to sign off on a project of type A; it’s enough to know the ins and outs of that project and have some experience.

This, which at first may sound outrageous (forestry engineers signing bridge construction projects?!), is perhaps not that bad, because certain specific attributions are kept and the rest are removed (I don’t know which are kept and which are dropped), and the idea is perhaps a noble one: reducing bureaucracy and lowering the cost of the paperwork needed to carry out works. If that’s the case, I wish it didn’t only affect engineering and architecture, but also lawyers and notaries, right?

How it affects us Computer Engineers

The draft establishes that computing professionals can keep using the title «Engineers», although we are not included in the regulations laid out for the other engineering disciplines. And that, my friends, is the crux of the matter: we are not grouped with the classic, well-established engineering disciplines. And, looking on the bright side, we can count ourselves lucky: other new engineering disciplines have even lost the right to be called engineering at all…

Where does all this come from?

From the «Ley Guerra» of 1986, named after the minister who established it, Alfonso Guerra. That law regulated the competences of the different engineering disciplines; Computer Engineering did not yet exist in Spain, the few professionals in the sector were not organized, and so they did not raise their hands to be heard.

Back then the aim was regulation, whereas now it is the opposite: de-regulation of attributions.

What is the real problem?

The real problem is that we are not treated like the other engineering disciplines, that we are treated as a «minor» engineering. Right now that is the issue, since the deregulation of attributions doesn’t really affect us at all. However, the current situation opens the door to many future problems:

  • If in the future the tables turn and attributions are re-regulated (it has been done before and will probably be done again), those of us who are not in the group will be left out, and potential attributions such as Computer Security, Cryptography or Project Management will slip out of our hands and be controlled by industrial or telecommunications engineers (sigh). Given that we don’t go around building bridges, you can imagine what it means to leave tasks of such importance in the hands of unqualified staff.
  • Being looked down on in one way or another will lead many future professionals to choose other, more prestigious studies; we will suffer a brain drain in our sector, and since technology is probably the field with the brightest future in Spain and in the world, it would be an irreparable loss for the country and for the progress of our global society.
  • Fair treatment, respect, equality… those basic things everyone is presumed to deserve, and which you sometimes have to demand just to be listened to…

Who is the villain of the story?

This law started brewing under the previous socialist government and is being carried on by the current conservative government. As I pointed out before, the law is neither good nor bad; time will tell, and the case is more complex than it seems. The point is that we are being left out, and Spain cannot turn its back on more than 150,000 leading professionals who help move the country forward. To give you an idea, mining engineers, who are in the group of the Chosen, number no more than 5,000.

In fact there are socialist, UPyD and conservative politicians who support us, understand the situation and even understand the humiliation the current draft represents for us.

The real problem lies with the technocrats, the lobbies entrenched in the relevant positions of the Administration who won’t budge. They are the ones with the influence to advise on and decide the final text, and they don’t want anyone taking away their juicy piece of the pie. These lobbies are made up of industrial, civil and telecommunications engineers… who say that these computer kids have no business here, that they are not engineers and that anyone can do their job.

Given that the work done in Computer Engineering is strongly governed by engineering procedures inherited and adapted from other engineering disciplines, and not by scientific or artistic procedures, we firmly believe (and the degree curricula state as much) that we are just as much engineers as anyone else.

So what do we want?

To be treated like the other engineering disciplines. No more, no less. We want to be wherever the rest of the engineering disciplines are.

What can we do?

Spread the word. Let every Computer Engineering professional know that this is a problem that affects us all; it won’t take away our daily bread, but we may lose a KEY opportunity regarding future decisions that concern our sector and that belong to us by plain and simple justice.

Above us are the professional bodies (Colegios de Informática) and various associations and pressure groups trying to talk to politicians, spread the message and gather support; the main barrier, however, lies in the lobbies I mentioned above.

Conclusion

In the coming months the text will stop being a draft and become law. The socialist group tried to pass it during the previous legislature, but backed down at the last minute given the low level of consensus and our protests in the streets. So they left it for the next Government and passed on the hot potato. Now it is time to decide, and now is when everything is at stake.

Howto: Execute a command and use the output in Python

Let’s do something very simple to show a couple of things:

  • How to script Plastic SCM commands;
  • How to execute a program in Python and do something useful with the output.

Say that you want to check if there’s been some activity in a specific Plastic SCM server in a period of time. To do that, a good idea is to check if new changesets have been created. But let’s suppose that you have hundreds of repositories, so you really don’t want to check ‘em all one by one, do you?

Right, so let’s write some Python code to achieve this:

from subprocess import Popen, PIPE

# List every repository in the server; the format string keeps only the names.
proc = Popen(["cm", "lrep", "--format={1}"],
    stdout=PIPE, stderr=PIPE)

repositories = proc.stdout.readlines()

for repository in repositories:
    repository = repository.replace('\n','')
    print "Repository: %r" % repository
    # Ask for the changesets created in this repository since the given date.
    proc = Popen(
        ["cm",
         "find",
         "changesets where date > '01/01/2013' on repository '"+
         repository + "@localhost:8084'"
        ],
        stdout=PIPE, stderr=PIPE)
    print proc.stdout.read()
    print "------------------------------"

First, we import the library we need to execute subprocesses and redirect output and errors. This is quite a common library. Then we create a child process to execute the command «cm lrep», to get the list of repositories on that server. The format string just trims the output so that we get only the repository names.

See that we are redirecting the standard output and the standard error to a PIPE. Then we get the output of the command. This is one of the beauties of Python: with one single line you get the lines of the output parsed into a list :-).

Now we iterate over the list of repositories we’ve just got, remove the EOL character and execute the proper command to query whether there have been new changesets in the last year (this could have been calculated from the current date, or passed as an argument to the script).

We execute the cm find changesets command, get the output and just print it to the console.

A short story

Year 2072. Like every morning, Fred woke up at 8 a.m. He had had a restless night, probably because it was spring, the weather was quite unstable and he was dreaming a lot. He got dressed quickly, had a coffee with a couple of biscuits, grabbed his All-In-One and went out into the street, on his way to work.

An All-In-One was a device that served as a mobile phone, radio, audio player, camera, calendar, event manager, social network synchronizer and a thousand other useless things. More or less like today, but smaller and without a screen, since it worked with a kind of hologram that you controlled with your fingers by tracing certain movements in the air.

Fred worked as head of the IT department at MeetLove, the Meetic of its time. Its creator, John Marrison, had patented an innovative, groundbreaking idea ten years earlier: a computer system that, based on very precise and elaborate sociological parameters of the client in question, searched for the perfect partner, thanks to an extremely complex algorithm running on a server of maximum capacity, whose upkeep was now Fred’s responsibility.

In the year 2072 the pace of life had caused more divorces and break-ups than ever before in history; people were more independent, but also unhappier. People were not willing to put up with another person’s quirks or habits at all, and at the slightest argument one of the two would walk out the door and start over. Many men and women died alone in private retirement homes, paid for with a lot of money, and in many cases without children. John’s idea, which he wisely decided to patent, covered precisely a need that many longed for, or dreamed of when they heard the oldest people talk about it: love, faithfulness, respect, the complicity of a couple.

Since then, and thanks to John’s system, over the last ten years the number of divorces had dropped to just 5% of new couples, or at least that’s what they claimed. «Why waste your time with people who aren’t worth it? Meet your better half without suffering heartbreak and break-ups!» Such was the advertising campaign that John himself fronted on the billboards covering the buildings of every city.

And the idea caught on; boy, did it catch on. John’s profits were counted in millions and people were happy. You paid, you went through the procedure, and you found your perfect match. People went to their first date with the conviction of someone who already knows they have won. Everything was easier. Nothing could fail; the system was infallible. Many tried to crack the algorithm, without success: the secrecy protecting it was absolute, and the server running the search was guarded by the strictest security measures.

That morning, Fred had had a nightmare. Something was wrong. His predecessor in the job, John Marrison himself in person, had told him that the system was so stable that his job would be a piece of cake; it would hardly give him any work and he would almost never have to check on the system. In the dream, the system went down, the server stopped working and everything was utter chaos; he was fired on the spot and everyone laughed at him.

That is why that morning he got dressed quickly and practically ran to work, without taking public transport. When he got to his desk, he had to check his notes to remember how to access the central system, the most important one, the one that ran the matching algorithm. He authenticated through the different protection layers of the system until he reached the core. Then he turned pale when he discovered that the core was down. After several minutes of desperate searching, he understood that the algorithm had never existed; the machine had been running on empty for the last ten years.

Fred understood his boss’s trick, and smiled with an ironic grimace. He looked at the picture hanging in his office; in it, John looked radiant, arms crossed, in a pose both confident and defiant. «What a bastard», he thought. He got up and made himself a good coffee.

How to achieve your goals (IV and final)

The vertical approach

Since it’s been a while since we introduced this approach and we hadn’t come back to it yet, let’s briefly recall what the vertical approach was about: it is about how to approach a given project, from its conception to its completion.

The vertical approach is about how to keep a project under control, identify a solution or make sure all the right steps have been determined. And how do we achieve that? Well, as usual… in the most natural way, which curiously is the least common one.

Our brain works as follows:

  1. Define purpose and principles: first the what and the why.
  2. Visualize the outcome: where do you want to get to?
  3. Brainstorm: now think about what needs to happen to reach your goal.
  4. Organize: of everything you’ve thought of, what is useful and what isn’t?
  5. Identify the next actions: of what’s useful, in what order?

Rather than jumping straight into doing, first think about what to do and how you are going to do it. How would you organize a party? If you like cooking, how would you prepare your favourite recipe? The key is to define the goal, visualize the result, see yourself carrying out the project, because that makes it much easier to decide which steps will lead you to success and what to discard.

The most important thing is to start right from the very beginning: define the what and the why; define what success means for the project, clarify the goal, create decision criteria. If you are working in a team, then motivate and expand options. For the latter, brainstorming is very useful.

The best way to have a good idea is to have many ideas, and nothing is more dangerous than an idea when it is the only one we have.

When brainstorming, keep the following in mind:

  • Don’t judge, challenge, evaluate or criticize.
  • Go for quantity, not quality.
  • Leave analysis and organization for later.

The importance of cognitive load

A very important detail when you are working on your projects is to use cognitive load. That is: use something physical that anchors you to the idea and doesn’t let you lose focus. When you are thinking about a project, the brain tends to drift after about 60 seconds. That’s why we play with a ball, pace around the room, doodle on paper… they are all signals telling our brain to keep working on the idea and not wander off. Try thinking about something with your arms crossed, sitting still and staring at the ceiling; you’ll see it is much harder to stay focused and not switch to something else.

A piece of advice: use pen and paper to write down everything that comes to mind. It doesn’t matter if it is loose words, diagrams, arrows, crossings-out, drawings… the important thing is to have an anchor for your brain that keeps it working on the idea. I suggest pen and paper, but it can be a pencil, the computer, or drawing with a stick in the sand. The important thing is to use cognitive load, something that keeps us attached to the idea.

Organize your ideas

Once you have all the ideas in front of you, identify natural relationships and structure:

  • What needs to happen to get the final result?
  • In what order?
  • What is the most important element to ensure success?

Once that is done, decide the next steps. Define them down to the smallest detail, write them down and make sure you don’t forget anything. One question: who is going to do it? If it is not you, add the corresponding actions to the Waiting For list.

If the next actions aren’t clear, go back, reorganize, brainstorm again, until you clearly see the next steps. They should end up emerging naturally. Take your time.

Some final tips

Let’s talk a little about the work environment.

First of all, your work environment shouldn’t be shared, because what is right for you isn’t necessarily right for somebody else.

Have everything you need within arm’s reach. Note that if you share your work environment this gets complicated, because the things you need are probably not the ones somebody else needs. Don’t share your tools: don’t let it happen that when you need something you can’t find it where it should be. This may sound very strict, but if you have read this far carefully you will understand why it matters.

Before getting down to a task, think about all the pending chores and errands you have and do them right away, before you start working. Or clearly plan and schedule when you are going to do them. Five minutes can save a work session in which you are constantly distracted by more urgent pending things, or by things that, even if they are not really urgent, distract you because they pop into your mind at the worst possible moment.

 

And that’s all. I hope you have found this series of articles interesting. If you feel like reading the book, you know where to find it. I have tried to highlight the most important parts, but I have probably left things out, and surely some of the things I have covered are of little interest to others, even if they matter to me. In the end, the important thing is to interpret, reflect on, judge and adapt the knowledge we receive to our own needs, so we can apply it in the way that suits each of us best. Many thanks for the comments received, and if you have any further questions it will be a pleasure to address them; I hope I know how to answer them.


Source:
David Allen, Getting Things Done